Results 1 - 20 of 517
1.
Comput Intell Neurosci ; 2022: 7105526, 2022.
Article in English | MEDLINE | ID: mdl-35281192

ABSTRACT

This research uses a mathematical formulation to determine the optimal intervals for preventive maintenance of machines, with the goal of reducing expected failure time when the failure data follow the Weibull distribution. The reliability function, failure rate, and mean time between failures were derived both before and after preventive maintenance to quantify the improvement in machine condition. The derivations draw on real data covering preventive maintenance operations, the downtime required to repair machine or device faults occurring between maintenance periods, and the downtime needed to carry out the maintenance itself. The study concludes that preventive maintenance increases machine reliability and lengthens the average operating time between failures.
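The abstract does not reproduce the formulas themselves. As a point of reference only, the standard Weibull reliability quantities and the classical age-replacement criterion often used to choose a preventive-maintenance interval T can be written as follows (C_p is the cost of a planned maintenance action, C_f the cost of an unplanned failure; the paper's exact derivation may differ):

R(t) = e^{-(t/\eta)^{\beta}}, \qquad
h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta - 1}, \qquad
\mathrm{MTBF} = \eta\,\Gamma\!\left(1 + \frac{1}{\beta}\right)

C(T) = \frac{C_p\,R(T) + C_f\,\bigl[1 - R(T)\bigr]}{\int_0^{T} R(t)\,dt}, \qquad
T^{*} = \arg\min_{T > 0} C(T)

For a shape parameter beta > 1 (increasing failure rate), minimizing C(T) yields a finite optimal preventive-maintenance interval T*.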


Subject(s)
Computer Systems , Equipment Failure , Computer Systems/standards , Reproducibility of Results
2.
Public Health Rep ; 136(1_suppl): 18S-23S, 2021.
Article in English | MEDLINE | ID: mdl-34726975

ABSTRACT

In 2019, Connecticut launched an opioid overdose-monitoring program to provide rapid intervention and limit opioid overdose-related harms. The Connecticut Statewide Opioid Response Directive (SWORD)-a collaboration among the Connecticut State Department of Public Health, Connecticut Poison Control Center (CPCC), emergency medical services (EMS), New England High Intensity Drug Trafficking Area (HIDTA), and local harm reduction groups-required EMS providers to call in all suspected opioid overdoses to the CPCC. A centralized data collection system and the HIDTA overdose mapping tool were used to identify outbreaks and direct interventions. We describe the successful identification of a cluster of fentanyl-contaminated crack cocaine overdoses leading to a rapid public health response. On June 1, 2019, paramedics called in to the CPCC 2 people with suspected opioid overdose who reported exclusive use of crack cocaine after being resuscitated with naloxone. When CPCC specialists in poison information followed up on the patients' status with the emergency department, they learned of 2 similar cases, raising suspicion that a batch of crack cocaine was mixed with an opioid, possibly fentanyl. The overdose mapping tool pinpointed the overdose nexus to a neighborhood in Hartford, Connecticut; the CPCC supervisor alerted the Connecticut State Department of Public Health, which in turn notified local health departments, public safety officials, and harm reduction groups. Harm reduction groups distributed fentanyl test strips and naloxone to crack cocaine users and warned them of the dangers of using alone. The outbreak lasted 5 days and tallied at least 22 overdoses, including 6 deaths. SWORD's near-real-time EMS reporting combined with the overdose mapping tool enabled rapid recognition of this overdose cluster, and the public health response likely prevented additional overdoses and loss of life.


Subject(s)
Crack Cocaine/administration & dosage , Fentanyl/adverse effects , Opiate Overdose/diagnosis , Adult , Computer Systems/standards , Computer Systems/trends , Connecticut/epidemiology , Crack Cocaine/therapeutic use , Female , Fentanyl/therapeutic use , Humans , Male , Middle Aged , Opiate Overdose/epidemiology , Population Surveillance/methods
3.
PLoS One ; 16(9): e0256799, 2021.
Article in English | MEDLINE | ID: mdl-34492070

ABSTRACT

BACKGROUND: Health facilities in developing countries are increasingly adopting electronic health record systems (EHRs) to support healthcare processes. However, only limited studies assess the actual use of EHRs once adopted in these settings. We assessed the state of the 376 KenyaEMR (national EHR) implementations in healthcare facilities offering HIV services in Kenya. METHODS: The study focused on seven EHR use indicators. Six of the seven were programmed and packaged into a query script executed within each KenyaEMR (KeEMRs) implementation to collect monthly server-log data for each indicator for the period 2012-2019. The indicators included staff system use, observations (clinical data volume), data exchange, standardized terminologies, patient identification, and automatic reports. The seventh indicator (EHR variable completeness) was derived from the routine data quality report within the EHRs. Data were analysed using descriptive statistics, and multiple linear regression was used to examine how individual facility characteristics affected use of the system. RESULTS: 213 facilities spanning 19 counties participated in the study. On average, only 18.1% (SD = 13.1%, p < 0.001) of authorized users actively used the KeEMRs across the facilities. The average volume of clinical data (observations) captured in the EHRs was 3363 (SD = 4259). Only a few facilities (14.1%) had health data exchange capability. 97.6% of EHR concept dictionary terms mapped to standardized terminologies such as CIEL. Within the facility EHRs, only 50.5% (SD = 35.4%, p < 0.001) of patients had the nationally endorsed patient identifier recorded. Multiple regression analysis indicated that the mode of EHR implementation and use needs improvement. CONCLUSION: The standard EHR use indicators can effectively measure EHR use and consequently the success of EHR implementations. The results suggest that most of the assessed areas need improvement, especially active use of the system and data exchange readiness.
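For illustration only, the first indicator (staff system use) could be derived from monthly server logs along the following lines; the table schema and column names are hypothetical and are not taken from the actual KenyaEMR query script.

import pandas as pd

def active_user_share(user_log: pd.DataFrame, authorized_users: int, month: str) -> float:
    """Percentage of authorized users with at least one login in the given month.

    user_log is assumed to hold one row per login with columns
    'user_id' and 'login_date' (hypothetical schema).
    """
    target = pd.Period(month, freq="M")
    active = user_log.loc[
        user_log["login_date"].dt.to_period("M") == target, "user_id"
    ].nunique()
    return 100.0 * active / authorized_users if authorized_users else 0.0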


Subject(s)
Delivery of Health Care/standards , Electronic Health Records/standards , HIV Infections/epidemiology , Health Facilities/standards , Computer Systems/standards , Female , HIV Infections/virology , Humans , Kenya/epidemiology , Male
4.
Acta Neurol Belg ; 121(2): 341-349, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33486717

ABSTRACT

Creutzfeldt-Jakob disease (CJD) is a fatal neurodegenerative disease belonging to the family of transmissible spongiform encephalopathies (TSEs), or prion diseases. Historically, CJD diagnosis has been based on the combination of clinical features and in vivo markers, including CSF protein assays, MRI and EEG changes. Brain-derived CSF proteins such as 14-3-3, t-tau and p-tau have been widely used to support the diagnosis of probable CJD, although these tests have limitations in sensitivity and specificity. More recently, a new method for the pre-mortem diagnosis of sporadic CJD has been developed, based on the ability of PrPsc to induce the polymerization of protease-sensitive recombinant PrP (PrPsen) into amyloid fibrils. This Real-Time Quaking-Induced Conversion (RT-QuIC) assay allows the detection of > 1 fg of PrPsc in diluted CJD brain homogenate and in a variety of biological tissues and fluids. In the present study, we performed a meta-analysis on the reliability of the RT-QuIC method in the diagnosis of sporadic CJD, in comparison with 14-3-3 and tau protein. Twelve studies were included in the statistical analysis, which showed that RT-QuIC has a very high specificity and a sensitivity comparable to that of 14-3-3 and tau protein in the CSF, and hence can be used as a reliable biomarker for the diagnosis of sporadic CJD.


Subject(s)
Computer Systems/standards , Creutzfeldt-Jakob Syndrome/diagnostic imaging , Creutzfeldt-Jakob Syndrome/physiopathology , Encephalopathy, Bovine Spongiform/diagnostic imaging , Encephalopathy, Bovine Spongiform/physiopathology , Biomarkers/cerebrospinal fluid , Creutzfeldt-Jakob Syndrome/cerebrospinal fluid , Electroencephalography/methods , Electroencephalography/standards , Encephalopathy, Bovine Spongiform/cerebrospinal fluid , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards
5.
J Med Internet Res ; 23(1): e22831, 2021 01 20.
Article in English | MEDLINE | ID: mdl-33470949

ABSTRACT

BACKGROUND: As the aging population continues to grow, the number of adults living with dementia or other cognitive disabilities in residential long-term care homes is expected to increase. Technologies such as real-time locating systems (RTLS) are being investigated for their potential to improve the health and safety of residents and the quality of care and efficiency of long-term care facilities. OBJECTIVE: The aim of this study is to identify factors that affect the implementation, adoption, and use of RTLS for use with persons living with dementia or other cognitive disabilities in long-term care homes. METHODS: We conducted a systematic review of the peer-reviewed English language literature indexed in MEDLINE, Embase, PsycINFO, and CINAHL from inception up to and including May 5, 2020. Search strategies included keywords and subject headings related to cognitive disability, residential long-term care settings, and RTLS. Study characteristics, methodologies, and data were extracted and analyzed using constant comparative techniques. RESULTS: A total of 12 publications were included in the review. Most studies were conducted in the Netherlands (7/12, 58%) and used a descriptive qualitative study design. We identified 3 themes from our analysis of the studies: barriers to implementation, enablers of implementation, and agency and context. Barriers to implementation included lack of motivation for engagement; technology ecosystem and infrastructure challenges; and myths, stories, and shared understanding. Enablers of implementation included understanding local workflows, policies, and technologies; usability and user-centered design; communication with providers; and establishing policies, frameworks, governance, and evaluation. Agency and context were examined from the perspective of residents, family members, care providers, and the long-term care organizations. CONCLUSIONS: There is a striking lack of evidence to justify the use of RTLS to improve the lives of residents and care providers in long-term care settings. More research related to RTLS use with cognitively impaired residents is required; this research should include longitudinal evaluation of end-to-end implementations that are developed using scientific theory and rigorous analysis of the functionality, efficiency, and effectiveness of these systems. Future research is required on the ethics of monitoring residents using RTLS and its impact on the privacy of residents and health care workers.


Subject(s)
Cognitive Dysfunction/therapy , Computer Systems/standards , Long-Term Care/standards , Data Analysis , Humans , Qualitative Research
6.
J Cancer Res Ther ; 16(4): 703-707, 2020.
Article in English | MEDLINE | ID: mdl-32930106

ABSTRACT

Pathologists have been using the tool of their trade, "the microscope," since the early 17th century, and diagnostic pathology, or tissue-based diagnosis, is now characterized by high specificity and sensitivity. Advances in telecommunication technology have transformed medicine, and telepathology has emerged in pursuit of better health-care delivery. Telepathology is the practice of diagnostic pathology performed at a distance, with images viewed on a video monitor rather than directly through the (light) microscope. This article provides an overview of the field, including specific applications, practice, benefits, limitations, regulatory issues, and the latest advances, along with a literature-based perspective on the current status of telepathology in the Indian scenario.


Subject(s)
Computer Systems/standards , Education, Medical, Continuing/methods , Microscopy, Video/methods , Remote Consultation/methods , Telepathology/methods , Humans , India , Telepathology/standards , Telepathology/trends
7.
Burns ; 46(8): 1829-1838, 2020 12.
Article in English | MEDLINE | ID: mdl-32826097

ABSTRACT

INTRODUCTION: Early judgment of burn depth is very important for the accurate formulation of treatment plans. In medical imaging, artificial intelligence has the potential to serve as a highly experienced assistant that improves early clinical diagnosis, but the lack of large annotated image collections has limited progress in the burn field. METHODS: 484 early wound images were collected within 48 h of injury from burn patients who were later discharged home, at five different levels of hospitals in Hunan Province, China. According to actual healing time, all images were manually annotated by five professional burn surgeons and divided into three classes: shallow (0-10 days), moderate (11-20 days) and deep (more than 21 days or skin-graft healing). The annotated regions of interest were further divided into 5637 patches of 224 × 224 pixels, of which 1733 were shallow, 1804 moderate, and 2100 deep. We used transfer learning with a pre-trained ResNet50 model, splitting the images 7:1.5:1.5 into training, validation, and test sets. RESULTS: A burn depth recognition model based on a convolutional neural network was established, with a diagnostic accuracy of about 80% across the three burn types. DISCUSSION: Actual healing time can be used to deduce the depth of burn involvement. The burn depth recognition model can accurately infer a patient's healing time and burn depth and is expected to be useful as an aid to diagnosis.
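As a rough sketch of the kind of transfer-learning setup described (a pre-trained ResNet50 backbone adapted to 224 × 224 patches with three output classes), one possible Keras configuration is shown below; the hyperparameters are illustrative assumptions, not the authors' reported settings.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_burn_depth_model(num_classes: int = 3) -> tf.keras.Model:
    # ImageNet-pretrained backbone without its classification head
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # start by training only the new head
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

After the head converges, some or all backbone layers can be unfrozen for fine-tuning at a lower learning rate; whether the authors did so is not stated in the abstract.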


Subject(s)
Burns/classification , Burns/diagnostic imaging , Computer Systems/standards , Adult , Burns/epidemiology , China/epidemiology , Computer Systems/statistics & numerical data , Humans , Time Factors , Wound Healing/physiology
8.
Diabetes Care ; 43(10): 2537-2543, 2020 10.
Article in English | MEDLINE | ID: mdl-32723843

ABSTRACT

OBJECTIVE: International type 1 diabetes registries have shown that HbA1c levels are highest in young people with type 1 diabetes; however, improving their glycemic control remains a challenge. We propose that use of the factory-calibrated Dexcom G6 CGM system would improve glycemic control in this cohort. RESEARCH DESIGN AND METHODS: We conducted a randomized crossover trial in young people with type 1 diabetes (16-24 years old) comparing the Dexcom G6 CGM system and self-monitoring of blood glucose (SMBG). Participants were assigned to the interventions in random order during two 8-week study periods. During SMBG, blinded continuous glucose monitoring (CGM) was worn by each participant for 10 days at the start, week 4, and week 7 of the control period. HbA1c measurements were drawn after enrollment and before and after each treatment period. The primary outcome was time in range 70-180 mg/dL. RESULTS: Time in range was significantly higher during CGM compared with control (35.7 ± 13.5% vs. 24.6 ± 9.3%; mean difference 11.1% [95% CI 7.0-15.2]; P < 0.001). CGM use reduced mean sensor glucose (219.7 ± 37.6 mg/dL vs. 251.9 ± 36.3 mg/dL; mean difference -32.2 mg/dL [95% CI -44.5 to -20.0]; P < 0.001) and time above range (61.7 ± 15.1% vs. 73.6 ± 10.4%; mean difference 11.9% [95% CI -16.4 to -7.4]; P < 0.001). HbA1c level was reduced by 0.76% (95% CI -1.1 to -0.4) (-8.5 mmol/mol [95% CI -12.4 to -4.6]; P < 0.001). Times spent below range (<70 mg/dL and <54 mg/dL) were low and comparable during both study periods. Sensor wear was 84% during the CGM period. CONCLUSIONS: CGM use in young people with type 1 diabetes improves time in target and HbA1c levels compared with SMBG.
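For clarity, the primary outcome (time in range 70-180 mg/dL) is simply the percentage of sensor glucose readings falling inside that window; a minimal sketch, not the trial's analysis code:

def time_in_range(glucose_mg_dl, low=70, high=180):
    """Percentage of CGM readings within [low, high] mg/dL."""
    readings = list(glucose_mg_dl)
    return 100.0 * sum(low <= g <= high for g in readings) / len(readings)

# time_in_range([65, 110, 150, 200, 95]) -> 60.0  (3 of 5 readings in range)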


Subject(s)
Diabetes Mellitus, Type 1/blood , Glycated Hemoglobin/metabolism , Glycemic Control , Adolescent , Adult , Blood Glucose/drug effects , Blood Glucose/metabolism , Blood Glucose Self-Monitoring/instrumentation , Blood Glucose Self-Monitoring/standards , Calibration , Cohort Studies , Computer Systems/standards , Cross-Over Studies , Diabetes Mellitus, Type 1/drug therapy , Diabetes Mellitus, Type 1/ethnology , Female , Glycated Hemoglobin/analysis , Glycated Hemoglobin/drug effects , Glycemic Control/instrumentation , Glycemic Control/methods , Glycemic Control/standards , Humans , Insulin/administration & dosage , Insulin Infusion Systems/standards , Male , Patient Care Planning , Time Factors , United Kingdom/epidemiology , Young Adult
10.
Methods Mol Biol ; 2127: 227-244, 2020.
Article in English | MEDLINE | ID: mdl-32112326

ABSTRACT

Cryo-electron microscopy (cryo-EM) is a powerful tool for investigating the structure of macromolecules under near-native conditions. Especially in the context of membrane proteins, this technique has allowed researchers to obtain structural information at a previously unattainable level of detail. Specimen preparation remains the bottleneck of most cryo-EM research projects, with membrane proteins representing particularly challenging targets of investigation due to their universal requirement for detergents or other solubilizing agents. Here we describe preparation of negative staining and cryo-EM grids and downstream data collection of membrane proteins in detergent, by far the most common solubilization agent. This protocol outlines a quick and straightforward procedure for screening and determining the structure of a membrane protein of interest under biologically relevant conditions.


Subject(s)
Cryoelectron Microscopy/methods , Data Collection/methods , Detergents/pharmacology , Membrane Proteins/chemistry , Animals , Calibration , Computer Systems/standards , Cryoelectron Microscopy/instrumentation , Cryoelectron Microscopy/standards , Data Collection/standards , Detergents/chemistry , Humans , Membrane Proteins/drug effects , Membrane Proteins/isolation & purification , Microscopy, Electron, Transmission/instrumentation , Microscopy, Electron, Transmission/methods , Microscopy, Electron, Transmission/standards , Negative Staining/instrumentation , Negative Staining/methods , Negative Staining/standards , Protein Denaturation/drug effects , Specimen Handling/instrumentation , Specimen Handling/methods
11.
J Orthop Surg Res ; 15(1): 103, 2020 Mar 11.
Article in English | MEDLINE | ID: mdl-32160894

ABSTRACT

BACKGROUND: To explore the feasibility of identifying malignant musculoskeletal soft tissue tumors using real-time shear wave elastography (rtSWE). METHODS: One hundred fifteen musculoskeletal soft tissue tumors in 92 consecutive patients were examined using both conventional ultrasonography (US) and rtSWE. For each patient, the rtSWE parameters, including maximum elasticity (Emax), mean elasticity (Emean), minimum elasticity (Emin), standard deviation of elasticity (Esd), and rtSWE image pattern, were obtained. Eighty-one histopathologically confirmed tumors from 73 patients were analyzed. RESULTS: The 81 lesions included in the study were histopathologically classified as malignant (n = 21) or benign (n = 60). Statistically significant differences between benign and malignant lesions were found in conventional US characteristics including size, depth, margin, echogenicity, mass texture, and power Doppler signal. Significant differences were also found in the quantitative rtSWE findings (Emax, Emean, Emin, and Esd values) and in the qualitative rtSWE parameter, image pattern. Multivariate analysis showed that infiltrative margin (OR, 4.470) and size (OR, 1.046) were independent US predictors of malignancy, while the Esd value (OR, 9.047) was an independent predictor of malignancy among the quantitative rtSWE parameters. Areas under the ROC curve (Az) for US features, the Esd value, and the rtSWE image pattern were 0.851, 0.795, and 0.792, respectively. CONCLUSIONS: Conventional US and quantitative and qualitative rtSWE parameters are useful for predicting malignancy in musculoskeletal soft tissue tumors. rtSWE can supplement conventional US in diagnosing these tumors.
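As an aside on the reported Az values: the area under the ROC curve for a single quantitative predictor such as Esd can be estimated as in the short sketch below. This is illustrative only; the study's statistical software and cut-offs are not given in the abstract.

from sklearn.metrics import roc_auc_score

def az_for_predictor(values, is_malignant):
    """values: per-lesion measurements (e.g., Esd); is_malignant: 1 = malignant, 0 = benign."""
    return roc_auc_score(is_malignant, values)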


Subject(s)
Computer Systems/standards , Elasticity Imaging Techniques/standards , Soft Tissue Neoplasms/diagnostic imaging , Ultrasonography, Doppler/standards , Adolescent , Adult , Aged , Aged, 80 and over , Child , Diagnosis, Differential , Elasticity Imaging Techniques/methods , Female , Humans , Male , Middle Aged , Reproducibility of Results , Ultrasonography, Doppler/methods , Young Adult
12.
PLoS One ; 15(2): e0228434, 2020.
Article in English | MEDLINE | ID: mdl-32027668

ABSTRACT

The service quality and system dependability of real-time communication networks depend strongly on the analysis of monitored data to identify concrete problems and their causes. Many such problems can be described by their structural properties, their temporal properties, or a combination of both. Because current research lacks approaches that sufficiently address both kinds of properties simultaneously, we propose a new feature space specifically suited for this task, which we analyze for its theoretical properties and practical relevance. We evaluate its classification performance on real-world data sets of structural-temporal mobile communication data and compare it with the performance achieved by feature representations used in related work. For this purpose we propose a system that automatically detects and predicts classes of pre-defined sequence behavior, greatly reducing the cost of otherwise required manual analysis. With our proposed feature spaces this system achieves a precision of more than 93% at recall values of 100%, with up to 6.7% higher effective recall than otherwise similarly performing alternatives, notably outperforming deep learning, kernel learning and ensemble learning approaches from related work. Furthermore, the supported system calibration allows reliable predictions to be separated from unreliable ones more effectively, which is highly relevant for any practical application.


Subject(s)
Communication , Deep Learning , Machine Learning , Neural Networks, Computer , Computer Systems/standards , Data Accuracy , Data Mining/methods , Data Mining/standards , Datasets as Topic/standards , Humans , Mobile Applications/standards , Mobile Applications/statistics & numerical data , Reproducibility of Results , Time Factors , Validation Studies as Topic
13.
J Dairy Sci ; 103(4): 3856-3866, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31864744

ABSTRACT

We are developing a real-time, data-integrated, data-driven, continuous decision-making engine, The Dairy Brain, by applying precision farming, big data analytics, and the Internet of Things. This is a transdisciplinary research and extension project that engages multidisciplinary scientists, dairy farmers, and industry professionals. Dairy farms have embraced large and diverse technological innovations such as sensors and robotic systems, and procured vast amounts of constant data streams, but they have not been able to integrate all this information effectively to improve whole-farm decision making. Consequently, the effects of all this new smart dairy farming are not being fully realized. It is imperative to develop a system that can collect, integrate, manage, and analyze on- and off-farm data in real time for practical and relevant actions. We are using the state-of-the-art database management system from the University of Wisconsin-Madison Center for High Throughput Computing to develop our Agricultural Data Hub that connects and analyzes cow and herd data on a permanent basis. This involves cleaning and normalizing the data as well as allowing data retrieval on demand. We illustrate our Dairy Brain concept with 3 practical applications: (1) nutritional grouping that provides a more accurate diet to lactating cows by automatically allocating cows to pens according to their nutritional requirements aggregating and analyzing data streams from management, feed, Dairy Herd Improvement (DHI), and milking parlor records; (2) early risk detection of clinical mastitis (CM) that identifies first-lactation cows under risk of developing CM by analyzing integrated data from genetic, management, and DHI records; and (3) predicting CM onset that recognizes cows at higher risk of contracting CM, by continuously integrating and analyzing data from management and the milking parlor. We demonstrate with these applications that it is possible to develop integrated continuous decision-support tools that could potentially reduce diet costs by $99/cow per yr and that it is possible to provide a new dimension for monitoring health events by identifying cows at higher risk of CM and by detecting 90% of CM cases a few milkings before disease onset. We are securely advancing toward our overarching goal of developing our Dairy Brain. This is an ongoing innovative project that is anticipated to transform how dairy farms operate.


Subject(s)
Big Data , Computer Systems , Dairying/methods , Decision Making , Mastitis, Bovine/diagnosis , Animals , Cattle , Cattle Diseases/diagnosis , Cattle Diseases/genetics , Cattle Diseases/physiopathology , Computer Systems/standards , Dairying/economics , Dairying/statistics & numerical data , Diet/veterinary , Female , Humans , Lactation , Longitudinal Studies , Mastitis, Bovine/genetics , Mastitis, Bovine/physiopathology , Milk/economics , Nutritional Requirements
14.
PLoS One ; 14(8): e0220135, 2019.
Article in English | MEDLINE | ID: mdl-31369592

ABSTRACT

SPEC CPU is one of the most common benchmark suites used in computer architecture research. CPU2017 has recently been released to replace CPU2006. In this paper we present a detailed evaluation of the memory hierarchy performance for both the CPU2006 and single-threaded CPU2017 benchmarks. The experiments were executed on an Intel Xeon Skylake-SP, which is the first Intel processor to implement a mostly non-inclusive last-level cache (LLC). We present a classification of the benchmarks according to their memory pressure and analyze the performance impact of different LLC sizes. We also test all the hardware prefetchers showing they improve performance in most of the benchmarks. After comprehensive experimentation, we can highlight the following conclusions: i) almost half of SPEC CPU benchmarks have very low miss ratios in the second and third level caches, even with small LLC sizes and without hardware prefetching, ii) overall, the SPEC CPU2017 benchmarks demand even less memory hierarchy resources than the SPEC CPU2006 ones, iii) hardware prefetching is very effective in reducing LLC misses for most benchmarks, even with the smallest LLC size, and iv) from the memory hierarchy standpoint the methodologies commonly used to select benchmarks or simulation points do not guarantee representative workloads.


Subject(s)
Algorithms , Benchmarking , Computer Systems/standards , Computers/standards , Software
15.
J Med Internet Res ; 21(1): e9076, 2019 01 14.
Article in English | MEDLINE | ID: mdl-31344680

ABSTRACT

BACKGROUND: One of the essential elements of a strategic approach to improving patients' experience is to measure and report on patients' experiences in real time. Real-time feedback (RTF) is increasingly being collected using digital technology; however, several factors may influence the success of the digital system. OBJECTIVE: The aim of this review was to evaluate the digital maturity and patient acceptability of real-time patient experience feedback systems. METHODS: We systematically searched the following databases to identify papers that used digital systems to collect RTF: The Cochrane Library, Global Health, Health Management Information Consortium, Medical Literature Analysis and Retrieval System Online, EMBASE, PsycINFO, Web of Science, and CINAHL. In addition, Google Scholar and gray literature were utilized. Studies were assessed on their digital maturity using a Digital Maturity Framework on the basis of the following 4 domains: capacity/resource, usage, interoperability, and impact. A total score of 4 indicated the highest level of digital maturity. RESULTS: RTF was collected primarily using touchscreens, tablets, and Web-based platforms. Implementation of digital systems showed acceptable response rates and generally positive views from patients and staff. Patient demographics among RTF respondents varied, with an overrepresentation of females, White patients, and patients aged ≥65 years. Of the 13 eligible studies, none had digital systems deemed to be at the highest level of maturity. Three studies received scores of 3, 2, and 1, respectively, and four studies scored 0 points. While 7 studies demonstrated capacity/resource, 8 demonstrated impact. None of the studies demonstrated interoperability in their digital systems. CONCLUSIONS: Patients and staff alike are willing to engage in RTF delivered using digital technology, thereby disrupting previous paper-based feedback. However, a lack of emphasis on digital maturity may lead to ineffective RTF, thwarting improvement efforts. Therefore, given the potential benefits of RTF, health care services should ensure that their digital systems deliver across the digital maturity continuum.


Subject(s)
Computer Systems/standards , Health Services/standards , Feedback , Female , Humans , Male
16.
J Med Internet Res ; 21(7): e13719, 2019 07 05.
Article in English | MEDLINE | ID: mdl-31278734

ABSTRACT

BACKGROUND: The rapid deterioration observed in the condition of some hospitalized patients can be attributed to either disease progression or imperfect triage and level of care assignment after their admission. An early warning system (EWS) to identify patients at high risk of subsequent intrahospital death can be an effective tool for ensuring patient safety and quality of care and reducing avoidable harm and costs. OBJECTIVE: The aim of this study was to prospectively validate a real-time EWS designed to predict patients at high risk of inpatient mortality during their hospital episodes. METHODS: Data were collected from the system-wide electronic medical record (EMR) of two acute Berkshire Health System hospitals, comprising 54,246 inpatient admissions from January 1, 2015, to September 30, 2017, of which 2.30% (1248/54,246) resulted in intrahospital deaths. Multiple machine learning methods (linear and nonlinear) were explored and compared. The tree-based random forest method was selected to develop the predictive application for the intrahospital mortality assessment. After constructing the model, we prospectively validated the algorithms as a real-time inpatient EWS for mortality. RESULTS: The EWS algorithm scored patients' daily and long-term risk of inpatient mortality probability after admission and stratified them into distinct risk groups. In the prospective validation, the EWS prospectively attained a c-statistic of 0.884, where 99 encounters were captured in the highest risk group, 69% (68/99) of whom died during the episodes. It accurately predicted the possibility of death for the top 13.3% (34/255) of the patients at least 40.8 hours before death. Important clinical utilization features, together with coded diagnoses, vital signs, and laboratory test results were recognized as impactful predictors in the final EWS. CONCLUSIONS: In this study, we prospectively demonstrated the capability of the newly-designed EWS to monitor and alert clinicians about patients at high risk of in-hospital death in real time, thereby providing opportunities for timely interventions. This real-time EWS is able to assist clinical decision making and enable more actionable and effective individualized care for patients' better health outcomes in target medical facilities.
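A minimal sketch of the modeling approach described above (a tree-based random forest scored with the c-statistic on held-out encounters) might look as follows; the feature set, class balancing, and tree count are assumptions for illustration, not the deployed EWS.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_validate_ews(X, y):
    """X: per-encounter EMR features (vitals, labs, coded diagnoses); y: 1 if the episode ended in death."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                   random_state=0)
    model.fit(X_train, y_train)
    risk = model.predict_proba(X_test)[:, 1]   # mortality risk score per encounter
    return model, roc_auc_score(y_test, risk)  # c-statistic on held-out data

In the study itself the validation was prospective (scoring live admissions daily) rather than a retrospective hold-out, which this sketch does not capture.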


Subject(s)
Computer Systems/standards , Electronic Health Records/standards , Machine Learning/standards , Monitoring, Physiologic/methods , Mortality/trends , Risk Assessment/methods , Algorithms , Female , Humans , Inpatients , Male , Middle Aged , Prospective Studies , Retrospective Studies , Risk Factors
17.
Neural Netw ; 117: 152-162, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31170575

ABSTRACT

Stochastic computing (SC) is a promising computing paradigm that can help address both the uncertainties of future process technology and the challenges of efficient hardware realization for deep neural networks (DNNs). However, the imprecision and long latency of SC have rendered previous SC-based DNN architectures less competitive than optimized fixed-point digital implementations, unless inference accuracy is significantly sacrificed. In this paper we propose a new SC-MAC (multiply-and-accumulate) algorithm, a key building block for SC-based DNNs, that is orders of magnitude more efficient and accurate than previous SC-MACs. We also show how our new SC-MAC can be extended to a vector version and used to accelerate both convolution and fully-connected layers of convolutional neural networks (CNNs) using the same hardware. Our experimental results using CNNs designed for the MNIST and CIFAR-10 datasets demonstrate that not only are our SC-based CNNs more accurate and 40∼490× more energy-efficient for convolution layers than conventional SC-based ones, but they also achieve lower area-delay product and lower energy than precision-optimized fixed-point implementations without sacrificing accuracy. We also demonstrate the feasibility of our SC-based CNNs through FPGA prototypes.
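For readers unfamiliar with stochastic computing, the sketch below shows the conventional SC arithmetic the paper builds on: values in [0, 1] encoded as random bitstreams, multiplication as a bitwise AND, and scaled addition via a multiplexer. This is the baseline SC-MAC idea only, not the paper's improved algorithm.

import random

def encode(p: float, n: int) -> list[int]:
    # unipolar encoding: each bit is 1 with probability p
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(stream: list[int]) -> float:
    return sum(stream) / len(stream)

def sc_multiply(a: list[int], b: list[int]) -> list[int]:
    return [x & y for x, y in zip(a, b)]            # AND gate: p_a * p_b

def sc_scaled_add(a: list[int], b: list[int]) -> list[int]:
    return [x if random.random() < 0.5 else y        # MUX: (p_a + p_b) / 2
            for x, y in zip(a, b)]

# decode(sc_multiply(encode(0.6, 4096), encode(0.5, 4096))) is close to 0.30;
# the long bitstreams needed for precision are the latency problem the paper targets.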


Subject(s)
Computer Systems/standards , Neural Networks, Computer , Computer Systems/economics , Stochastic Processes
18.
J Med Syst ; 43(5): 133, 2019 Apr 03.
Article in English | MEDLINE | ID: mdl-30945011

ABSTRACT

There is a growing push to enhance the quality of healthcare services through the use of technology in the health sector. The main focus in transforming traditional healthcare into smart healthcare is on supporting patients as well as medical professionals. However, this changeover is not easy, owing to the security and integrity issues associated with it. The security and privacy of patients' personal health records can be protected by permitting only authorized access to confidential health data via a suitably designed authentication scheme. In this direction, we considered the role of the Universal Serial Bus (USB), the most widely adopted interface for communication between peripheral devices and host controllers such as laptops, personal computers, smartphones, and tablets. In the process, we analysed a recently proposed three-factor authentication scheme for consumer USB Mass Storage Devices (MSD) by He et al. In this paper, we demonstrate that He et al.'s scheme is vulnerable to leakage of temporary but session-specific information, late detection of message replay, and attacks on forward and backward secrecy. Motivated by the benefits of USB, we then propose a secure three-factor authentication scheme for smart healthcare.


Subject(s)
Computer Security/standards , Computer Systems/standards , Health Information Exchange/standards , Communication , Confidentiality , Electronic Health Records/standards , Humans
19.
Infant Behav Dev ; 54: 114-119, 2019 02.
Article in English | MEDLINE | ID: mdl-30660858

ABSTRACT

Infant looking-time paradigms often use specialized software for real time manual coding of infant gaze. Here, I introduce PyHab, the first open-source looking-time coding and stimulus presentation solution designed specifically with open science in mind. PyHab is built on the libraries of PsychoPy (Peirce, 2007). PyHab has its own graphical interface for building studies and requires no programming experience to use. When creating a study, PyHab saves a folder that contains all of the code required to run the study and all of the stimuli, making each experiment a self-contained, easily shared package. This feature guarantees the ability to replicate a PyHab experiment exactly as it was run. PyHab also has several features designed to support rigorous methodology, such as experimenter blinding and automatic stimulus control. In addition, because it is open-source, PyHab can be modified and improved not only by its developer but by anyone who knows Python.
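To give a flavor of what key-press looking-time coding involves on top of PsychoPy, the sketch below implements a bare-bones trial loop in which a coder toggles the infant's gaze state and the trial ends after a fixed look-away criterion. It is written directly against PsychoPy primitives and is not PyHab's actual code or API; the stimulus file and parameters are hypothetical.

from psychopy import visual, core, event

def run_trial(win, stim, lookaway_crit=2.0, max_dur=60.0):
    """Coder taps 'space' when the infant starts looking and again on look-away.

    Returns accumulated looking time; the trial ends after lookaway_crit
    seconds of continuous look-away, at max_dur, or on 'escape'."""
    clock = core.Clock()
    looking, look_on_at, look_off_at, total = False, 0.0, None, 0.0
    while clock.getTime() < max_dur:
        stim.draw()
        win.flip()                       # present the stimulus each frame
        now = clock.getTime()
        keys = event.getKeys(keyList=["space", "escape"])
        if "escape" in keys:
            break
        if "space" in keys:              # coder toggled the gaze state
            if looking:
                total += now - look_on_at
                look_off_at = now
            else:
                look_on_at = now
            looking = not looking
        if (not looking and look_off_at is not None
                and now - look_off_at >= lookaway_crit):
            break                        # look-away criterion reached
    if looking:
        total += clock.getTime() - look_on_at
    return total

# Hypothetical setup:
# win = visual.Window(size=(1280, 720), color="grey", units="pix")
# stim = visual.ImageStim(win, image="checkerboard.png")
# print(run_trial(win, stim))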


Subject(s)
Computer Systems/standards , Fixation, Ocular/physiology , Infant Behavior/physiology , Photic Stimulation/methods , Software/standards , Humans , Infant , Infant Behavior/psychology